# Server Architecture
This document explains the three-server design of the SuperSet Telegram Notification Bot. The system separates concerns into:

- **Telegram bot server**: interactive commands and user session management
- **Webhook server**: REST APIs for external integrations, web push subscriptions, and administrative endpoints
- **Scheduler server**: automated update jobs (fetching data and sending notifications)

The architecture emphasizes decoupling, dependency injection, daemon-mode operation, and clear inter-server communication patterns. It supports both a polling-based Telegram bot and webhook-based integrations, plus a dedicated scheduler for periodic tasks.
The repository organizes code by responsibility:

- `app/main.py`: CLI entrypoint and command dispatch
- `app/servers/`: FastAPI webhook server, Telegram bot server, and scheduler server
- `app/services/`: notification orchestration, channel implementations, database abstraction, and utilities
- `app/clients/`: database client and external API clients
- `app/runners/`: data ingestion and notification-sending workflows
- `app/core/`: configuration, daemon utilities, and shared settings
- `docs/`: deployment, configuration, and operational guides
- **Telegram Bot Server**: handles user commands (`/start`, `/help`, `/stop`, `/status`, `/stats`, `/noticestats`, `/userstats`, `/web`), user registration and management, and admin commands via injected services.
- **Webhook Server**: FastAPI-based REST server exposing health checks, web push subscription endpoints, notification dispatch, and statistics endpoints.
- **Scheduler Server**: runs automated update jobs (SuperSet + emails) and official placement scraping on a cron schedule, independent of the Telegram bot.
- **Configuration and Daemon Utilities**: centralized settings, logging, daemon mode, and PID management for process lifecycle.
- **Services and Clients**: notification orchestration, channel implementations (Telegram, Web Push), database abstraction, and the MongoDB client.
The system is designed as a distributed, decoupled architecture:

- The CLI entrypoint (`main.py`) launches one of three modes: bot, webhook, or scheduler.
- The bot server runs continuously in polling mode, responding to user commands and maintaining user sessions.
- The webhook server exposes REST endpoints for external systems and internal admin tasks.
- The scheduler server runs periodic jobs independently, fetching data and broadcasting notifications.
- All servers share a common configuration and logging setup, and rely on dependency injection for services and clients.
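The mode dispatch above can be sketched as a small CLI. This is an illustrative outline, not the project's actual `main.py`; the function names and placeholder return values are assumptions:

```python
# Hypothetical sketch of a single CLI entrypoint dispatching three server
# modes, in the spirit of app/main.py described in this document.
import argparse


def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(prog="main.py")
    parser.add_argument("mode", choices=["bot", "webhook", "scheduler"],
                        help="which server to start")
    parser.add_argument("--daemon", action="store_true",
                        help="detach and run as a background daemon")
    return parser


def dispatch(args: argparse.Namespace) -> str:
    # In a real entrypoint each branch would lazily import and start the
    # corresponding server; placeholders keep this sketch self-contained.
    if args.mode == "bot":
        return "starting Telegram bot (polling)"
    if args.mode == "webhook":
        return "starting FastAPI webhook server"
    return "starting scheduler"


if __name__ == "__main__":
    print(dispatch(build_parser().parse_args()))
```

Lazy per-branch imports keep each mode's dependencies isolated, so the scheduler does not pay the cost of loading bot libraries and vice versa.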
```mermaid
graph TD
    CLIENT["Admin Tools"]
    BOT["Telegram Bot Server<br/>(Polling)"]
    WEB["Webhook Server<br/>(FastAPI)"]
    SCHED["Scheduler Server<br/>(APScheduler)"]
    NOTIF["NotificationService"]
    TG["TelegramService"]
    WP["WebPushService"]
    DB["DatabaseService"]
    DBC["DBClient"]
    UPD["UpdateRunner"]
    NOTIF_RUN["NotificationRunner"]
    CLIENT --> WEB
    WEB --> NOTIF
    BOT --> DB
    SCHED --> UPD
    SCHED --> NOTIF_RUN
    NOTIF --> TG
    NOTIF --> WP
    DB --> DBC
    UPD --> DB
    NOTIF_RUN --> DB
```
## Telegram Bot Server

Responsibilities:

- Command routing (`/start`, `/help`, `/stop`, `/status`, `/stats`, `/noticestats`, `/userstats`, `/web`)
- User registration and deactivation
- Admin command delegation
- Asynchronous polling loop with graceful shutdown

Dependency Injection:

- `DatabaseService`, `NotificationService`, `AdminTelegramService`, `PlacementStatsCalculatorService`

Session and User Management:

- Adds users on `/start`, deactivates on `/stop`, retrieves user status and stats

Error Handling:

- Graceful shutdown, logging, and safe printing in daemon mode
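The injection pattern can be illustrated with a minimal sketch: services are constructed once and handed to the server, so handlers never build their own clients. The classes below are simplified stand-ins, not the project's real implementations:

```python
# Illustrative sketch of the bot server's dependency-injection and command
# routing; the real handlers are async and use the Telegram library.
from dataclasses import dataclass, field


@dataclass
class DatabaseService:
    """Stand-in for the real DatabaseService backed by DBClient/MongoDB."""
    users: dict = field(default_factory=dict)

    def add_user(self, chat_id: int) -> None:
        self.users[chat_id] = {"active": True}

    def deactivate_user(self, chat_id: int) -> None:
        if chat_id in self.users:
            self.users[chat_id]["active"] = False


class BotServer:
    def __init__(self, db: DatabaseService):
        self.db = db  # injected; the same service layer other servers use

    def handle(self, command: str, chat_id: int) -> str:
        if command == "/start":
            self.db.add_user(chat_id)
            return "Registered for notifications."
        if command == "/stop":
            self.db.deactivate_user(chat_id)
            return "Notifications stopped."
        if command == "/status":
            user = self.db.users.get(chat_id)
            return "active" if user and user["active"] else "inactive"
        return "Unknown command. Try /help."
```

Because the service is injected, tests can substitute an in-memory fake exactly as this sketch does, without touching MongoDB.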
## Webhook Server (FastAPI)

Responsibilities:

- Health checks (`/`, `/health`)
- Web push subscription management (`/api/push/subscribe`, `/api/push/unsubscribe`, `/api/push/vapid-key`)
- Notification dispatch (`/api/notify`, `/api/notify/telegram`, `/api/notify/web-push`)
- Statistics endpoints (`/api/stats`, `/api/stats/placements`, `/api/stats/notices`, `/api/stats/users`)
- External integration webhook (`/webhook/update`)

Middleware and Routing:

- CORS middleware
- Dependency injection via app state and `Depends`

Error Handling:

- HTTP exceptions with descriptive details
- Validation via Pydantic models
## Scheduler Server

Responsibilities:

- Scheduled update jobs (fetch SuperSet + emails, send notifications)
- Official placement data scraping
- Independent operation from the Telegram bot

Scheduling:

- Cron-based jobs at multiple times per day
- Daily official placement scrape at noon IST

Execution:

- Uses runners and services directly, constructed locally rather than injected
```mermaid
graph TD
    Jobs["Cron Jobs<br/>- Update every hour<br/>- Official scrape at noon"]
    Jobs --> Loop["Event Loop (keep running)"]
    Loop --> Trigger{"Cron Triggered?"}
    Trigger --> |Yes| RunUpdate["run_scheduled_update()"]
    RunUpdate --> FetchSS["fetch_and_process_updates()"]
    RunUpdate --> FetchEmails["_run_email_updates()"]
    RunUpdate --> SendTG["send_updates(telegram=True, web=False)"]
    Trigger --> |No| Loop
    SendTG --> Loop
```
## Daemon Mode Operation and Process Management

Daemon Utilities:

- Double-fork daemonization, PID file management, status checks, and controlled stop
- Separate logging for the scheduler daemon

CLI Integration:

- `main.py` supports daemon mode for the bot and scheduler
- Reinitializes logging after fork to ensure proper file handles
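The double-fork and PID-file mechanics can be sketched as below. This is a condensed, Unix-only illustration of the classic technique, not the project's actual daemon module; stdio redirection is omitted for brevity:

```python
# Classic double-fork daemonization plus PID-file helpers for status checks.
import os
import sys
from pathlib import Path
from typing import Optional


def write_pid(pid_file: Path) -> None:
    pid_file.write_text(str(os.getpid()))


def read_pid(pid_file: Path) -> Optional[int]:
    try:
        return int(pid_file.read_text().strip())
    except (FileNotFoundError, ValueError):
        return None


def is_running(pid_file: Path) -> bool:
    pid = read_pid(pid_file)
    if pid is None:
        return False
    try:
        os.kill(pid, 0)  # signal 0: existence check, sends nothing
        return True
    except OSError:
        return False


def daemonize(pid_file: Path) -> None:
    if os.fork() > 0:   # first fork: original parent exits
        sys.exit(0)
    os.setsid()         # become session leader, detach from the tty
    if os.fork() > 0:   # second fork: cannot re-acquire a controlling tty
        sys.exit(0)
    os.chdir("/")
    write_pid(pid_file)
    # Logging must be reinitialized at this point: file handles opened
    # before the forks are shared with now-exited parents and are unsafe.
```

The final comment is the reason the CLI reinitializes logging after the fork, as noted above.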
## Inter-Server Communication Patterns

No direct inter-server calls:

- The bot server manages user sessions and commands
- The webhook server exposes REST endpoints for external integrations
- The scheduler server operates independently and uses runners/services directly

Shared infrastructure:

- All servers use the same configuration and logging setup
- Database access is centralized via `DatabaseService` and `DBClient`
The system follows a layered dependency structure with clear inversion of control via dependency injection:

- Servers depend on services, which depend on clients
- Configuration and daemon utilities are shared across servers
- Runners encapsulate workflows and are reused by both the scheduler and the CLI
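The layering can be made concrete with a small composition-root sketch. The class bodies are stand-ins; only the names mirror the document:

```python
# Illustrative server -> service -> client layering, wired in one place.
class DBClient:
    """Lowest layer: owns the MongoDB connection (stubbed here)."""
    def find(self, collection: str) -> list:
        return []


class DatabaseService:
    """Middle layer: domain operations expressed over the client."""
    def __init__(self, client: DBClient):
        self.client = client

    def active_users(self) -> list:
        return self.client.find("users")


class SchedulerServer:
    """Top layer: depends only on services, never on clients directly."""
    def __init__(self, db: DatabaseService):
        self.db = db


def compose() -> SchedulerServer:
    # The single place where concrete implementations are chosen; tests
    # can substitute a fake at any layer without touching the others.
    return SchedulerServer(DatabaseService(DBClient()))
```

Keeping construction in one composition function is what makes the inversion of control visible: no layer reaches down more than one level.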
## Performance Considerations

Asynchronous design:

- The bot server uses asynchronous polling
- The scheduler uses `AsyncIOScheduler` for non-blocking jobs

Rate limiting and batching:

- `TelegramService` applies rate limiting when broadcasting to users
- Long messages are split to comply with Telegram limits

Efficient data fetching:

- `UpdateRunner` pre-fetches existing IDs to minimize API calls
- Selective enrichment of jobs reduces expensive operations

Resource isolation:

- Separate daemon logs for the bot and scheduler reduce contention

Scalability:

- The webhook server can be horizontally scaled behind a load balancer
- MongoDB can be sharded for high-volume operations
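The rate-limiting and message-splitting behaviour can be sketched as follows. Telegram's Bot API caps messages at 4096 characters; the splitting strategy and the pacing delay below are illustrative, not the real `TelegramService`:

```python
# Sketch of splitting long texts at Telegram's 4096-char limit and pacing
# a broadcast so it stays under the Bot API's rate limits.
import asyncio

TELEGRAM_MAX_LEN = 4096  # hard per-message limit imposed by the Bot API


def split_message(text: str, limit: int = TELEGRAM_MAX_LEN) -> list:
    """Split on newlines where possible so chunks stay readable."""
    chunks, current = [], ""
    for line in text.splitlines(keepends=True):
        if len(current) + len(line) > limit:
            if current:
                chunks.append(current)
            # A single oversized line is hard-split as a last resort.
            while len(line) > limit:
                chunks.append(line[:limit])
                line = line[limit:]
            current = line
        else:
            current += line
    if current:
        chunks.append(current)
    return chunks


async def broadcast(chat_ids: list, text: str, delay: float = 0.05) -> int:
    """Send to each user with a small delay between users for pacing."""
    sent = 0
    for chat_id in chat_ids:
        for chunk in split_message(text):
            # placeholder for the real bot.send_message(chat_id, chunk)
            sent += 1
        await asyncio.sleep(delay)
    return sent
```

Splitting at newline boundaries first keeps notice text readable across chunks; hard-splitting only kicks in for a single pathological line.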
## Troubleshooting

Health checks:

- Use `GET /health` on the webhook server to verify service availability

Logs:

- Bot logs: `logs/superset_bot.log`
- Scheduler logs: `logs/scheduler.log`

Daemon status:

- Check whether a daemon is running via the PID-file-backed status checks provided by the daemon utilities

Common issues:

- Missing environment variables cause configuration errors
- MongoDB connectivity failures require verifying `MONGO_CONNECTION_STR`
- A misconfigured Telegram bot token or chat ID affects message delivery
- Web push requires VAPID keys; missing keys disable web push
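These failure modes can be caught at startup with a fail-fast configuration check. `MONGO_CONNECTION_STR` is named by this document; the other variable names below are illustrative assumptions:

```python
# Sketch of fail-fast env validation covering the common issues above.
import os


REQUIRED_VARS = ["MONGO_CONNECTION_STR", "TELEGRAM_BOT_TOKEN"]  # token name assumed
OPTIONAL_FEATURES = {
    "VAPID_PRIVATE_KEY": "web push disabled without VAPID keys",  # name assumed
}


def check_config(env=None) -> list:
    """Return human-readable problems instead of failing deep in a request."""
    if env is None:
        env = os.environ
    problems = [f"missing required env var: {name}"
                for name in REQUIRED_VARS if not env.get(name)]
    problems += [f"warning: {msg}"
                 for name, msg in OPTIONAL_FEATURES.items()
                 if not env.get(name)]
    return problems
```

Running such a check in each server's startup path turns a vague mid-request failure into an actionable message at launch.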
The three-server architecture cleanly separates concerns: the Telegram bot server focuses on user interactions, the webhook server exposes REST APIs for integrations, and the scheduler server automates data ingestion and notifications. The design leverages dependency injection, daemon mode, and shared configuration to achieve maintainability, scalability, and operability. With clear inter-server boundaries and robust error handling, the system supports both small deployments and larger-scale production environments.
## Deployment Considerations

- Choose a deployment option based on environment and scale (local/VPS, Docker, GitHub Actions, cloud platforms)
- Use systemd or PM2 for process supervision and automatic restarts
- Configure a reverse proxy and SSL certificates for webhook deployments
- Enable log rotation and automated backups for MongoDB
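As a starting point for systemd supervision, a unit like the following could work; the paths, user, and CLI invocation are placeholders to adapt to your deployment, not values taken from this project:

```ini
# /etc/systemd/system/superset-bot.service -- illustrative unit only
[Unit]
Description=SuperSet Telegram Notification Bot
After=network-online.target

[Service]
Type=simple
User=botuser
WorkingDirectory=/opt/superset-bot
ExecStart=/opt/superset-bot/.venv/bin/python app/main.py bot
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

A second unit with a different `ExecStart` mode would supervise the scheduler independently, matching the separate-instances guidance below.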
## Scaling Strategies

Horizontal scaling:

- Run multiple instances of the webhook server behind a load balancer
- Use Kubernetes deployments with readiness/liveness probes

Database scaling:

- Enable MongoDB sharding for high-volume collections

Operational scaling:

- Separate bot and scheduler instances for independent scaling
- Use separate process managers for each server
## Monitoring Approaches

Health endpoints:

- Use `/health` for liveness/readiness checks

Logging:

- Tail logs for errors and warnings

Alerts:

- Monitor health externally and send alerts on failure

Metrics:

- Track unsent notices and send success/failure ratios